When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.
The use of chemical imaging techniques is becoming a routine accompaniment to traditional methods in pathology. Significant technological advances have developed these next-generation techniques to provide rich, spatially resolved, multidimensional chemical images. The rise of digital pathology has significantly enhanced the synergy of these imaging modalities with optical microscopy and immunohistochemistry, deepening our understanding of the biological mechanisms and progression of disease. Techniques such as imaging mass cytometry provide labelled multidimensional (multiplex) images of specific components, used in conjunction with digital pathology techniques. These powerful techniques generate large volumes of high-dimensional data, posing significant challenges for data analysis. Unsupervised methods such as clustering are an attractive way to analyse these data, but they require the selection of parameters such as the number of clusters. Here, we propose a method to estimate the number of clusters in an automatic, data-driven manner, using a deep sparse autoencoder to embed the data into a lower-dimensional space. We compute the density of regions in the embedded space, most of which are empty, enabling the high-density regions to be detected as outliers and providing an estimate of the number of clusters. This framework provides a fully unsupervised, data-driven approach to analysing multidimensional data. In this work, we demonstrate our method on 45 multiplex imaging mass cytometry datasets. Moreover, our model is trained using only one of the datasets, and the learned embedding is applied to the remaining 44 images, providing an efficient data-analysis process. Finally, we demonstrate the high computational efficiency of our method, which is faster than estimating the number of clusters by computing the sum of squared distances as a function of the number of clusters.
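The cluster-count estimate described in the abstract above can be sketched in a few lines. This is a rough illustration of the density-outlier idea only: the paper's deep sparse autoencoder is not reproduced here, and `estimate_num_clusters`, the 2-D histogram density, and the quantile threshold are assumed stand-ins rather than the authors' implementation.

```python
import numpy as np

def estimate_num_clusters(embedded, bins=20, density_quantile=0.95):
    """Estimate the cluster count from a 2-D embedding by treating
    high-density histogram cells as outliers and counting the number
    of connected groups of such cells."""
    hist, _, _ = np.histogram2d(embedded[:, 0], embedded[:, 1], bins=bins)
    threshold = np.quantile(hist[hist > 0], density_quantile)
    dense = hist >= threshold
    visited = np.zeros_like(dense, dtype=bool)
    count = 0
    for i in range(bins):
        for j in range(bins):
            if dense[i, j] and not visited[i, j]:
                count += 1                      # found a new high-density region
                stack = [(i, j)]
                while stack:                    # flood fill (4-connected)
                    a, b = stack.pop()
                    if 0 <= a < bins and 0 <= b < bins and dense[a, b] and not visited[a, b]:
                        visited[a, b] = True
                        stack.extend([(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)])
    return count

# Two idealized, well-separated point clouds give an estimate of 2.
pts = np.vstack([np.zeros((200, 2)), np.full((200, 2), 10.0)])
print(estimate_num_clusters(pts))  # prints 2
```

In practice the embedding would come from the trained autoencoder's bottleneck, and the bin count and density quantile would need tuning to the data.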
Since its first introduction in 2015, the use of deep reinforcement learning (DRL) schemes has increased dramatically. Although used in many different applications, DRL solutions still suffer from a lack of interpretability. This lack has impeded their adoption by researchers and the general public. To address this problem, the field of explainable artificial intelligence (XAI) has emerged. The field comprises a variety of different methods that aim to open the DRL black box, ranging from interpretable symbolic decision trees to numerical methods such as Shapley values. This review examines which methods are being used and in which applications, in order to identify which models are best suited to each application, or whether certain methods are being underutilized.
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, child development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Self-supervised learning (SSL) has become a popular method for generating invariant representations without human annotation. However, the desired invariant representation is achieved by applying prior online transformation functions to the input data. As a result, each SSL framework is customized for a particular data type (e.g., visual data) and requires further modification if it is to be used for other data types. On the other hand, the autoencoder (AE), a generic and widely applicable framework, mainly focuses on dimensionality reduction and is not suited to learning invariant representations. This paper proposes a generic SSL framework based on a constrained self-labelling assignment process that prevents degenerate solutions. Specifically, the prior transformation functions are derived through an unsupervised adversarial training process to achieve invariant representations. Through this self-transformation mechanism, pairs of augmented instances can be generated from the same input data. Finally, a training objective based on contrastive learning is designed by leveraging the self-labelling assignment and the self-transformation mechanism. Although the self-transformation process is very generic, the proposed training strategy outperforms a majority of state-of-the-art representation-learning methods based on the AE structure. To validate the performance of our method, we conduct experiments on four types of data, namely visual, audio, text, and mass-spectrometry data, and compare them on four quantitative metrics. Our comparison results demonstrate that the proposed method is robust and successfully identifies patterns within the datasets.
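The contrastive objective over paired augmented instances described above belongs to the InfoNCE/NT-Xent family. Below is a minimal sketch of such a loss, assuming `z1[i]` and `z2[i]` are embeddings of two views of the same input. This is a generic contrastive loss, not the paper's exact objective, and it omits the self-labelling and adversarial self-transformation steps.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic contrastive (NT-Xent-style) loss: z1[i] and z2[i] are
    embeddings of two augmented views of the same input."""
    n = len(z1)
    z = np.vstack([z1, z2]).astype(float)
    z /= np.linalg.norm(z, axis=1, keepdims=True)        # unit-normalize rows
    sim = z @ z.T / temperature                          # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                       # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    logits = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Aligned views should incur a lower loss than unrelated views.
print(nt_xent_loss(z, z) < nt_xent_loss(z, rng.normal(size=(8, 16))))  # prints True
```

In a full pipeline the embeddings would come from the encoder being trained, and the loss would be minimized with a gradient-based optimizer rather than evaluated in NumPy.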
In this work, we examine the performance enhancement in the classification of medical imaging data when image features are combined with associated non-image data. We compare the performance of eight state-of-the-art deep neural networks in classification tasks when using image features alone and when these are combined with patient metadata. We utilize transfer learning, with networks pretrained on ImageNet used directly as feature extractors and fine-tuned on the target domain. Our experiments show that performance can be significantly enhanced by including metadata, and we use interpretability methods to determine which features lead to these enhancements. Moreover, our results indicate that the performance enhancement for natural medical imaging (e.g., optical images) benefits from using the pretrained models directly, whereas non-natural images (e.g., representations of non-imaging data) benefit from fine-tuning the pretrained networks. These enhancements come at a negligible additional cost in computation time and are therefore a practical approach for other applications.
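The fusion of image features with non-image metadata described above can be sketched as feature concatenation followed by a small classifier head. The snippet below is an illustrative toy, not the paper's pipeline: the eight backbone networks and the interpretability analysis are not reproduced, `fuse_features` and the logistic head are assumed names, and the data are synthetic.

```python
import numpy as np

def fuse_features(image_feats, metadata):
    """Concatenate backbone image features with standardized metadata."""
    meta = (metadata - metadata.mean(axis=0)) / (metadata.std(axis=0) + 1e-8)
    return np.hstack([image_feats, meta])

def train_logistic(X, y, lr=0.1, epochs=300):
    """Minimal logistic-regression head trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(X, y, w, b):
    return ((X @ w + b > 0).astype(int) == y).mean()

# Synthetic check: noise-only image features vs. the same features fused
# with informative metadata (hypothetical data, not the paper's experiments).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
image_feats = rng.normal(size=(200, 5))                 # pure noise
metadata = np.column_stack([2 * y - 1 + 0.3 * rng.normal(size=200),
                            rng.normal(size=200)])      # first column informative
w1, b1 = train_logistic(image_feats, y)
fused = fuse_features(image_feats, metadata)
w2, b2 = train_logistic(fused, y)
print(accuracy(fused, y, w2, b2) > accuracy(image_feats, y, w1, b1))  # prints True
```

The point of the toy is only that concatenating an informative metadata column raises classification accuracy over image features alone, mirroring the effect the abstract reports.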
Edge computing is changing the face of many industries and services. Common edge computing models offload computation, which makes them prone to security risks and privacy violations. However, advances in deep learning have enabled Internet of Things (IoT) devices to make decisions and run cognitive tasks locally. This research introduces a decentralized-control edge model in which most computation and decisions are moved to the IoT level. The model aims to decrease communication to the edge, which in turn enhances efficiency and decreases latency. The model also avoids the data transfer that raises security and privacy risks. To examine the model, we developed SAFEMYRIDES, a scene-aware ridesharing monitoring system in which smartphones detect violations at runtime. Current real-time monitoring systems are costly and require continuous network connectivity. The system uses optimized deep learning models that run locally on IoT devices to detect violations in ridesharing and to record violation incidents. The system would enhance safety and security in ridesharing without violating privacy.
Cognitive Computing (COC) aims to build highly cognitive machines with low computational resources that respond in real time. However, the scholarly literature shows varying research areas and various interpretations of COC. This calls for a cohesive architecture that delineates the nature of COC. We argue that if Herbert Simon considered design science to be the science of the artificial, then cognitive systems are the products of cognitive science, or "the newest science of the artificial". Therefore, building a conceptual basis for COC is an essential step toward prospective cognitive computing-based systems. This paper proposes an architecture of COC by analyzing the literature on COC using a range of statistical analysis methods. We then compare the statistical analysis results with previous qualitative analysis results to confirm our findings. The study also comprehensively surveys recent research on COC to identify the state of the art and connect the advances made in the various research disciplines within COC. The study found that three underlying computing paradigms, von Neumann, neuromorphic engineering, and quantum computing, comprehensively complement the structure of cognitive computation. The paper discusses possible applications and open research directions under the COC umbrella.
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
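The order-statistic construction described above admits a simple distribution-free instance: the k-th smallest observed loss upper-bounds the q-quantile with confidence 1 − δ whenever P(Binomial(n, q) ≥ k) ≤ δ. The sketch below implements that classical argument; it is an assumed illustration of the general idea, not necessarily the paper's exact framework, and `quantile_upper_bound` is an invented name.

```python
import math
import numpy as np

def quantile_upper_bound(losses, q=0.9, delta=0.05):
    """Smallest order statistic L_(k) with P(Q_q <= L_(k)) >= 1 - delta,
    valid for i.i.d. losses with no distributional assumptions.
    Relies on the fact that P(L_(k) < Q_q) <= P(Binomial(n, q) >= k).
    Returns None when n is too small for the requested (q, delta)."""
    n = len(losses)
    sorted_losses = np.sort(losses)
    # Binomial(n, q) pmf computed in log space for numerical stability.
    log_pmf = [math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
               + i * math.log(q) + (n - i) * math.log(1 - q)
               for i in range(n + 1)]
    pmf = np.exp(log_pmf)
    tail = np.cumsum(pmf[::-1])[::-1]      # tail[k] = P(Binomial(n, q) >= k)
    for k in range(1, n + 1):
        if tail[k] <= delta:
            return sorted_losses[k - 1]    # k-th smallest loss
    return None

losses = np.arange(1000) / 1000            # toy losses; 0.9-quantile is ~0.9
print(quantile_upper_bound(losses, q=0.9, delta=0.05))
```

On this toy grid the bound lands slightly above 0.9, reflecting the finite-sample slack: the guarantee must hold for any loss distribution, so the selected order statistic sits a few positions above the empirical quantile.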